Normalization vs Standardization

Introduction:

In data analysis and machine learning, preprocessing steps such as data normalization and standardization are crucial for improving the performance and interpretability of models. This Jupyter Notebook provides an overview of the importance of data normalization and standardization in preparing data for analysis and modeling.

Importance:

  1. Data Normalization:
    • Uniform Scaling: Ensures all features are scaled to a similar range, preventing features with larger numeric ranges from dominating distance-based computations (a short sketch after the import below makes this concrete).
    • Improved Convergence: Facilitates faster convergence in gradient-based optimization by making the loss surface better conditioned.
    • Interpretability: Easier interpretation as values are on a consistent scale, aiding in comparison and understanding of feature importance.
  2. Data Standardization:
    • Mean Centering: Transforms data to have a mean of 0 and a standard deviation of 1, simplifying interpretation of coefficients in linear models.
    • Handling Different Scales: Useful when features have different scales or units, making them directly comparable.
    • Reducing Sensitivity to Outliers: Less affected by outliers compared to normalization, leading to more robust models.
    • Maintaining Information: Preserves relative relationships between data points without altering the distribution shape.
import numpy as np
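
To make the "uniform scaling" point above concrete, here is a small illustrative sketch (the feature names and numbers are made up): with one feature in the tens and another in the tens of thousands, a Euclidean distance on raw values is driven almost entirely by the larger-scale feature, while min-max scaling puts both on comparable footing.

# Illustrative sketch: distance on raw vs. min-max-scaled features.
# Feature order is [age, income]; all values below are hypothetical.
a = np.array([25.0, 50_000.0])
b = np.array([45.0, 52_000.0])

print("Distance on raw features:", np.linalg.norm(a - b))  # ~2000, driven by income

# Assumed per-feature min/max bounds, used only for this example.
mins = np.array([18.0, 20_000.0])
maxs = np.array([70.0, 120_000.0])
a_scaled = (a - mins) / (maxs - mins)
b_scaled = (b - mins) / (maxs - mins)
print("Distance on scaled features:", np.linalg.norm(a_scaled - b_scaled))  # both features now contribute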

Normalization

  1. Normalization: Normalization typically refers to scaling numerical features to a common scale, often between 0 and 1. This is usually done by subtracting the minimum value and then dividing by the range (maximum - minimum). Normalization is useful when the distribution of the data does not follow a Gaussian distribution (Normal Distribution).
# Data Normalization without libraries:
def minMaxScaling(data):
    # Min-max scaling: map each value to [0, 1] via (x - min) / (max - min).
    min_val = min(data)
    max_val = max(data)

    # Note: if max_val == min_val the range is zero and the division below would fail.
    scaled_data = []
    for value in data:
        scaled = (value - min_val) / (max_val - min_val)
        scaled_data.append(scaled)
    return scaled_data
# Example data
data = np.array([10, 20, 30, 40, 50])
normalized_data = minMaxScaling(data)
print("Normalized data (Min-Max Scaling):", normalized_data)
Normalized data (Min-Max Scaling): [0.0, 0.25, 0.5, 0.75, 1.0]
from sklearn.preprocessing import MinMaxScaler

# Sample data
data = np.array([[1, 2], [3, 4], [5, 6]])

# Create the scaler
scaler = MinMaxScaler()

# Fit the scaler to the data and transform the data
normalized_data = scaler.fit_transform(data)

print("Original data:")
print(data)
print("\nNormalized data:")
print(normalized_data)
Original data:
[[1 2]
 [3 4]
 [5 6]]

Normalized data:
[[0.  0. ]
 [0.5 0.5]
 [1.  1. ]]
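
In practice the scaler is usually fit on training data only and then reused to transform new data; values outside the training min/max simply map outside [0, 1]. The sketch below (with made-up numbers) also shows MinMaxScaler's feature_range parameter, which targets an interval other than the default (0, 1).

# Sketch: reuse the min/max learned on training data, and change feature_range.
from sklearn.preprocessing import MinMaxScaler

train = np.array([[1, 2], [3, 4], [5, 6]])
new = np.array([[2, 3], [6, 7]])        # 6 and 7 exceed the training maxima

scaler = MinMaxScaler()                  # default feature_range is (0, 1)
scaler.fit(train)                        # learns per-feature min and max
print(scaler.transform(new))             # values above the training max map above 1.0

scaler_sym = MinMaxScaler(feature_range=(-1, 1))   # scale into [-1, 1] instead
print(scaler_sym.fit_transform(train))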

Standardization

  1. Standardization: Standardization, often called z-score normalization, transforms the data to have a mean of 0 and a standard deviation of 1. It does not change the shape of the distribution: if the original data is Gaussian, the standardized data is standard Gaussian; if it is skewed, it stays skewed. In Python, you typically use the StandardScaler from the sklearn.preprocessing module to standardize data.
def zScoreNormalization(data):
    # Z-score standardization: (x - mean) / std for each value.
    mean = sum(data) / len(data)
    # Population variance (divide by n), matching sklearn's StandardScaler default.
    variance = sum((x - mean) ** 2 for x in data) / len(data)
    std_dev = variance ** 0.5
    standardized_data = [(x - mean) / std_dev for x in data]
    return standardized_data
# Example data
data = [10, 20, 30, 40, 50]
standardized_data = zScoreNormalization(data)
print("Standardized data (Z-Score Normalization):", standardized_data)
Standardized data (Z-Score Normalization): [-1.414213562373095, -0.7071067811865475, 0.0, 0.7071067811865475, 1.414213562373095]
from sklearn.preprocessing import StandardScaler

# Sample data
data = np.array([[1, 2, 3], [3, 4, 5], [5, 6, 7]])

# Create the scaler
scaler = StandardScaler()

# Fit the scaler to the data and transform the data
standardized_data = scaler.fit_transform(data)

print("Original data:")
print(data)
print("\nStandardized data:")
print(standardized_data)
Original data:
[[1 2 3]
 [3 4 5]
 [5 6 7]]

Standardized data:
[[-1.22474487 -1.22474487 -1.22474487]
 [ 0.          0.          0.        ]
 [ 1.22474487  1.22474487  1.22474487]]
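
The fitted StandardScaler stores the statistics it learned, which can help when inspecting or undoing the transformation. A brief sketch of those attributes, on the same sample data:

# Sketch: inspect the learned statistics and invert the transformation.
from sklearn.preprocessing import StandardScaler

data = np.array([[1, 2, 3], [3, 4, 5], [5, 6, 7]])
scaler = StandardScaler().fit(data)

print("Per-feature means:", scaler.mean_)      # [3. 4. 5.]
print("Per-feature std devs:", scaler.scale_)  # population std (ddof=0)

standardized = scaler.transform(data)
print("Recovered original data:")
print(scaler.inverse_transform(standardized))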

Which one?

The choice between normalization and standardization depends on your data and the requirements of your analysis. Here are some guidelines to help you decide:

  1. Normalization:
    • Use normalization when the scale of features is meaningful and should be preserved.
    • Normalize data when you’re working with algorithms that require input features to be on a similar scale, such as algorithms using distance metrics like k-nearest neighbors or clustering algorithms like K-means.
    • If the distribution of your data is not Gaussian and you want to scale the features to a fixed range, normalization might be a better choice.
  2. Standardization:
    • Use standardization when the distribution of your data is Gaussian or when you’re unsure about the distribution.
    • Standardization is less affected by outliers compared to normalization, making it more suitable when your data contains outliers.
    • If you’re working with algorithms whose fitting or regularization benefits from zero-mean, unit-variance features, such as linear regression or logistic regression (especially with regularization), standardization is typically preferred.

In some cases, you might experiment with both approaches and see which one yields better results for your specific dataset and analysis. Additionally, it’s always a good practice to understand your data and the underlying assumptions of the algorithms you’re using to make informed decisions about data preprocessing techniques.
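
As a sketch of such an experiment (using the Iris dataset purely as a stand-in for your own data), both scalers can be dropped into a Pipeline so that scaling is re-fit on each cross-validation fold and the resulting scores compared directly:

# Sketch: compare min-max scaling and standardization for a distance-based model.
from sklearn.datasets import load_iris
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import MinMaxScaler, StandardScaler
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

X, y = load_iris(return_X_y=True)

for name, scaler in [("min-max", MinMaxScaler()), ("z-score", StandardScaler())]:
    pipe = Pipeline([("scale", scaler), ("knn", KNeighborsClassifier())])
    scores = cross_val_score(pipe, X, y, cv=5)
    print(f"{name} scaling, mean CV accuracy: {scores.mean():.3f}")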
